# Instruction following

## Hyperclovax SEED Text Instruct 0.5B
- **License:** Other
- **Tags:** Large Language Model, Transformers
- **Author:** naver-hyperclovax
- **Downloads:** 7,531 · **Likes:** 60

A Korean-optimized text-generation model with instruction-following capability and a lightweight design suited to edge-device deployment.
## Gemma 3 4b Persian V0
- **License:** Apache-2.0
- **Tags:** Large Language Model, Other
- **Author:** mshojaei77
- **Downloads:** 542 · **Likes:** 9

A Persian-specific model based on the Gemma 3 architecture, fine-tuned with QLoRA in 4-bit quantization and focused on Persian text generation and understanding.
## Lamarckvergence 14B
- **License:** Apache-2.0
- **Tags:** Large Language Model, Transformers, English
- **Author:** suayptalha
- **Downloads:** 15.36k · **Likes:** 24

Lamarckvergence-14B is a merged language model created with mergekit, combining Lamarck-14B-v0.7 and Qwenvergence-14B-v12-Prose-DS. It ranks first among models under 15B parameters on the Open LLM Leaderboard.
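As a rough illustration of how a mergekit merge like this is configured, here is a minimal config sketch. The merge method (SLERP), layer count, and parameters below are assumptions for illustration, not the actual recipe used to build Lamarckvergence-14B.

```yaml
# Hypothetical mergekit config — illustrative only, not the published recipe.
merge_method: slerp            # spherical interpolation between two parents (assumed)
base_model: Lamarck-14B-v0.7
slices:
  - sources:
      - model: Lamarck-14B-v0.7
        layer_range: [0, 48]   # layer count assumed for a ~14B model
      - model: Qwenvergence-14B-v12-Prose-DS
        layer_range: [0, 48]
parameters:
  t: 0.5                       # interpolation weight between the two parents
dtype: bfloat16
```

With mergekit installed, a config in this shape is run with its `mergekit-yaml` command, which writes the merged weights to an output directory.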
## Persianllama 13B
- **Tags:** Large Language Model, Transformers, Other
- **Author:** ViraIntelligentDataMining
- **Downloads:** 3,291 · **Likes:** 11

The first large language model for Persian, with 13 billion parameters, trained on the Persian Wikipedia corpus and designed for a range of natural language processing tasks.
## Mobillama 1B Chat
- **License:** Apache-2.0
- **Tags:** Large Language Model, Transformers, English
- **Author:** MBZUAI
- **Downloads:** 44 · **Likes:** 25

MobiLlama-1B-Chat is an instruction-following model fine-tuned from MobiLlama-1B, designed for resource-constrained devices with an emphasis on efficiency, low memory footprint, and fast response.
## Fialka 13B V4
- **License:** Apache-2.0
- **Tags:** Large Language Model, Transformers, Other
- **Author:** 0x7o
- **Downloads:** 95 · **Likes:** 5

The Fialka ("violet") series of language models is trained for instruction following and Russian-language dialogue. The fourth generation has been optimized with RLHF and offers stronger responses with richer informational content.
## Boana 7b Instruct
- **Tags:** Large Language Model, Transformers, Other
- **Author:** lrds-code
- **Downloads:** 24 · **Likes:** 5

Boana-7B-Instruct is a Portuguese instruction-tuned model based on LLaMA2-7B, designed for Portuguese-speaking users as a lower-complexity LLM option.
## Selfrag Llama2 7b
- **License:** MIT
- **Tags:** Large Language Model, Transformers
- **Author:** selfrag
- **Downloads:** 1,318 · **Likes:** 78

A 7-billion-parameter Self-RAG model that generates outputs for diverse user queries, adaptively invokes a retrieval system, and critiques its own outputs and the retrieved passages by generating reflection tokens.
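The description above outlines the Self-RAG control flow: decide whether to retrieve, then self-score the candidate generations. Below is a toy Python sketch of that loop under stated assumptions — the stub generator and retriever, the dict fields, and all names are illustrative stand-ins, not the actual Self-RAG API (the real model emits special reflection tokens such as `[Retrieval]` and `[IsSup]` during decoding).

```python
# Toy illustration of the Self-RAG control flow — not the real Self-RAG API.
# Retrieval/support decisions are modeled as plain dict fields returned by a
# stand-in generator instead of the model's actual reflection tokens.

def self_rag_answer(query, generate, retrieve):
    """Answer a query, retrieving passages only when the model asks for them."""
    draft = generate(query, passage=None)
    if not draft["wants_retrieval"]:      # model judged retrieval unnecessary
        return draft["text"]
    # Retrieve, generate one candidate per passage, and keep the candidate
    # that the model itself scores as best supported by its passage.
    candidates = [generate(query, passage=p) for p in retrieve(query)]
    return max(candidates, key=lambda c: c["support_score"])["text"]

# --- Stand-in components for demonstration (hypothetical) ---
def stub_generate(query, passage=None):
    if passage is None:  # first pass: the "model" asks for retrieval
        return {"text": "I am not sure.", "wants_retrieval": True, "support_score": 0.0}
    return {"text": f"Grounded answer based on: {passage}",
            "wants_retrieval": False, "support_score": len(passage)}

def stub_retrieve(query):
    return ["short note", "a much more detailed reference passage"]

print(self_rag_answer("example question", stub_generate, stub_retrieve))
# -> Grounded answer based on: a much more detailed reference passage
```

In the real model these decisions happen inline during decoding, so retrieval can also be triggered per-segment rather than once per query as in this simplified sketch.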
## Llama 2 70b Fb16 Korean
- **Tags:** Large Language Model, Transformers, Supports Multiple Languages
- **Author:** quantumaikr
- **Downloads:** 127 · **Likes:** 37

A version of the Llama2 70B model fine-tuned on Korean datasets, focused on Korean and English text-generation tasks.
## Stable Vicuna 13B GPTQ
- **Tags:** Large Language Model, Transformers, English
- **Author:** TheBloke
- **Downloads:** 49 · **Likes:** 219

StableVicuna-13B is a dialogue model fine-tuned from Vicuna-13B v0 via RLHF, packaged here in 4-bit GPTQ quantized format.
## Stable Vicuna 13b Delta
- **Tags:** Large Language Model, Transformers, English
- **Author:** CarperAI
- **Downloads:** 31 · **Likes:** 455

StableVicuna-13B is a fine-tuned version of Vicuna-13B v0, trained with Reinforcement Learning from Human Feedback (RLHF) via Proximal Policy Optimization (PPO) on a mix of dialogue and instruction datasets.
## Godot Dodo 4x 60k Llama 7b
- **Tags:** Large Language Model, Transformers
- **Author:** minosu
- **Downloads:** 39 · **Likes:** 4

An instruction-following model fine-tuned from LLaMA-7B, optimized for code-focused instruction scenarios.
© 2025 AIbase